
    Data-Driven Allocation of Preventive Care With Application to Diabetes Mellitus Type II

    Problem Definition. Increasing costs of healthcare highlight the importance of effective disease prevention. However, decision models for allocating preventive care are lacking. Methodology/Results. In this paper, we develop a data-driven decision model for determining a cost-effective allocation of preventive treatments to patients at risk. Specifically, we combine counterfactual inference, machine learning, and optimization techniques to build a scalable decision model that can exploit high-dimensional medical data, such as the data found in modern electronic health records. Our decision model is evaluated based on electronic health records from 89,191 prediabetic patients. We compare the allocation of preventive treatments (metformin) prescribed by our data-driven decision model with that of current practice. We find that if our approach is applied to the U.S. population, it can yield annual savings of $1.1 billion. Finally, we analyze the cost-effectiveness under varying budget levels. Managerial Implications. Our work supports decision-making in health management, with the goal of achieving effective disease prevention at lower costs. Importantly, our decision model is generic and can thus be used for effective allocation of preventive care for other preventable diseases. Comment: Accepted by Manufacturing & Service Operations Management.
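
    The decision model combines counterfactual inference, machine learning, and optimization. As a rough illustration of that idea only, the Python sketch below estimates individual treatment effects with a simple T-learner and then allocates treatments greedily under a budget; the function names, models, and greedy allocation rule are illustrative assumptions, not the authors' pipeline.

    # Minimal sketch of budget-constrained allocation of a preventive treatment.
    # Effects are estimated with a simple T-learner (two outcome models); the
    # paper's actual combination of counterfactual inference, ML, and
    # optimization is not reproduced here.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def estimate_effects(X, treated, outcome):
        # X: feature matrix, treated: 0/1 array, outcome: observed outcomes.
        # Fit separate outcome models for treated and untreated patients,
        # then take the difference of their predictions as the effect estimate.
        m1 = GradientBoostingRegressor().fit(X[treated == 1], outcome[treated == 1])
        m0 = GradientBoostingRegressor().fit(X[treated == 0], outcome[treated == 0])
        return m1.predict(X) - m0.predict(X)

    def allocate(effects, costs, budget):
        # Greedy knapsack relaxation: treat patients with the best
        # benefit-to-cost ratio until the budget is exhausted.
        order = np.argsort(-effects / costs)
        chosen, spent = [], 0.0
        for i in order:
            if effects[i] <= 0:
                break
            if spent + costs[i] <= budget:
                chosen.append(i)
                spent += costs[i]
        return chosen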

    Cheap talk and cherry-picking: What ClimateBERT has to say on corporate climate risk disclosures

    Disclosure of climate-related financial risks greatly helps investors assess companies’ preparedness for climate change. Voluntary disclosures, such as those based on the recommendations of the Task Force on Climate-related Financial Disclosures (TCFD), are being hailed as an effective measure for better climate risk management. We ask whether this expectation is justified. We do so by training ClimateBERT, a deep neural language model fine-tuned from the BERT language model. In analyzing the disclosures of TCFD-supporting firms, ClimateBERT comes to the sobering conclusion that the firms’ TCFD support is mostly cheap talk and that firms cherry-pick, reporting primarily non-material climate risk information.
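
    As a rough illustration of the method class, the sketch below fine-tunes a generic BERT checkpoint for sequence classification with the Hugging Face transformers API; the texts, labels, base model, and hyperparameters are placeholders, not the corpus or settings used to train ClimateBERT.

    # Generic sketch of fine-tuning a BERT-style classifier on disclosure
    # paragraphs (labels such as "climate-risk related" vs. "not related" are
    # purely illustrative).
    import torch
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    texts = ["We disclose physical climate risks ...", "General marketing text ..."]
    labels = [1, 0]

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                               num_labels=2)
    enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

    class DisclosureDataset(torch.utils.data.Dataset):
        # Wraps the tokenized paragraphs and labels for the Trainer.
        def __init__(self, enc, labels):
            self.enc, self.labels = enc, labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: v[i] for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=DisclosureDataset(enc, labels),
    )
    trainer.train()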

    Practical Implementation of a Graphics Turing Test

    We present a practical implementation of a variation of the Turing Test for realistic computer graphics. The test determines whether virtual representations of objects appear as real as genuine objects. Two experiments were conducted wherein a real object and a similar virtual object are presented to test subjects under specific restrictions. A criterion for passing the test is presented, based on the probability that subjects are unable to recognise a computer-generated object as virtual. The experiments show that the specific setup can be used to determine the quality of virtual reality graphics. Based on the results from these experiments, future versions of the Graphics Turing Test could ease the restrictions currently necessary in order to test object telepresence under more general conditions. Furthermore, the test could be used to determine the minimum requirements to achieve object telepresence.
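
    The exact pass criterion is defined in the paper; as an illustration of the general idea only, the sketch below uses a standard one-sided binomial test of whether subjects' real-versus-virtual accuracy exceeds chance. The 50% chance level, significance level, and trial counts are assumptions made for the example.

    # Illustrative pass criterion: subjects judge whether the shown object is
    # real or computer generated; if their accuracy is not significantly above
    # chance (50%), the rendering is deemed to pass. This is a generic binomial
    # check, not necessarily the exact criterion formulated in the paper.
    from scipy.stats import binomtest

    def passes_graphics_turing_test(correct, trials, alpha=0.05):
        # One-sided test of the null hypothesis "accuracy <= 0.5".
        result = binomtest(correct, trials, p=0.5, alternative="greater")
        return result.pvalue >= alpha  # pass if accuracy is indistinguishable from chance

    print(passes_graphics_turing_test(correct=23, trials=40))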

    GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints

    The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which have to be considered with caution as they only approximate the underlying ML model. Therefore, our paper investigates a series of intrinsically interpretable ML models and discusses their suitability for the IS community. More specifically, our focus is on advanced extensions of generalized additive models (GAMs), in which predictors are modeled independently in a non-linear way to generate shape functions that can capture arbitrary patterns but remain fully interpretable. In our study, we evaluate the prediction quality of five GAMs compared with six traditional ML models and assess their visual outputs for model interpretability. On this basis, we investigate their merits and limitations and derive design implications for further improvements.
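
    As one concrete example from the GAM family, the sketch below fits an Explainable Boosting Machine from the interpretml package, whose per-feature shape functions can be inspected directly; the dataset and model choice are illustrative and not necessarily among the five GAMs evaluated in the study.

    # Sketch of fitting one interpretable GAM-family model, an Explainable
    # Boosting Machine, where each predictor gets its own non-linear shape
    # function that can be plotted and read off directly.
    from interpret.glassbox import ExplainableBoostingClassifier
    from interpret import show
    from sklearn.datasets import load_breast_cancer

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    ebm = ExplainableBoostingClassifier()
    ebm.fit(X, y)

    # The global explanation exposes the per-feature shape functions,
    # which are the source of the model's interpretability.
    show(ebm.explain_global())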

    A Light in the Dark: Deep Learning Practices for Industrial Computer Vision

    In recent years, large pre-trained deep neural networks (DNNs) have revolutionized the field of computer vision (CV). Although these DNNs have been shown to be very well suited for general image recognition tasks, their application in industry is often precluded for three reasons: 1) large pre-trained DNNs are built on hundreds of millions of parameters, making deployment on many devices impossible; 2) the underlying dataset for pre-training consists of general objects, while industrial cases often involve very specific objects, such as structures on solar wafers; 3) potentially biased pre-trained DNNs raise legal issues for companies. As a remedy, we study neural networks for CV that we train from scratch. For this purpose, we use a real-world case from a solar wafer manufacturer. We find that our neural networks achieve performance similar to that of pre-trained DNNs, even though they consist of far fewer parameters and do not rely on third-party datasets.
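
    The sketch below shows a minimal from-scratch convolutional network in PyTorch of the kind such a study might train; the architecture, input size, and class count are placeholder assumptions, not the networks or wafer data used in the paper.

    # Minimal from-scratch CNN for a small industrial image-classification
    # task, illustrating the "far fewer parameters, no third-party
    # pre-training" idea. Sizes and classes are placeholders.
    import torch
    import torch.nn as nn

    class SmallWaferNet(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 grayscale inputs

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    model = SmallWaferNet()
    print(sum(p.numel() for p in model.parameters()), "parameters")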

    Using a Graphics Turing Test to Evaluate the Effect of Frame Rate and Motion Blur on Telepresence of Animated Objects

    A limited Graphics Turing Test is used to determine the frame rate that is required to achieve telepresence of an animated object. For low object velocities of 2.25 and 4.5 degrees of visual angle per second, a rotating object with no added motion blur is able to pass the test at 60 frames per second. The results of the experiments confirm previous results in psychophysics and show that the Graphics Turing Test is a useful tool in computer graphics. Even with simulated motion blur, our Graphics Turing Test could not be passed at frame rates of 30 and 20 frames per second. Our results suggest that 60 frames per second (instead of 30 frames per second) should be considered the minimum frame rate to achieve object telepresence and that motion blur provides only limited benefits.
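
    For intuition behind the reported conditions, the short sketch below computes the per-frame angular displacement implied by the tested object velocities and frame rates; the velocities and frame rates come from the abstract, while the relation displacement = velocity / frame rate is generic.

    # Angular displacement per frame for the tested conditions.
    for velocity in (2.25, 4.5):        # degrees of visual angle per second (from the abstract)
        for fps in (60, 30, 20):        # frames per second (from the abstract)
            print(f"{velocity} deg/s at {fps} fps -> {velocity / fps:.3f} deg per frame")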